! Rubrics experiment with additional coding tasks

20 Apr 2026

Use tree aid? Choose a map and work only on the bundles on that map.

Choose a dataset (maybe example-file or Lonely in London) to compare between:

  • our standard AI-coding approach (Steve suggested Mermaid, but I'm not really used to it, so I might go with our normal approach with sentiment and type columns)
  • a rubrics approach (come up with one after reading the articles), adding ~3 more columns

Experiment, Phase 1: check how much link coverage decreases when new columns/coding tasks (i.e. rubrics) are added.

Phase 2: Robustness/Certainty: how certain is the respondent about the causal claim?

My question: do I also assess the links after raw coding, or only check the link coverage with the new columns?
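A minimal sketch of the Phase 1 coverage comparison, assuming placeholder column names ("link", "sentiment", "type", "rubric_1" are all hypothetical, not the real schema): a link counts as covered only if every required coding column was filled in, so adding rubric columns can only keep coverage equal or push it down.

```python
import pandas as pd

def link_coverage(df: pd.DataFrame, required_cols: list[str]) -> float:
    """Share of rows with a non-null link AND non-null values
    in every required coding column."""
    coded = df.dropna(subset=["link"] + required_cols)
    return len(coded) / len(df)

# Toy data standing in for the real dataset.
df = pd.DataFrame({
    "link": ["a", "b", "c", "d"],
    "sentiment": ["pos", "neg", None, "pos"],
    "type": ["claim", "claim", "claim", None],
    "rubric_1": ["x", None, "y", "z"],
})

baseline = link_coverage(df, ["sentiment", "type"])            # standard approach
with_rubrics = link_coverage(df, ["sentiment", "type", "rubric_1"])  # rubrics approach
print(baseline, with_rubrics)  # → 0.5 0.25
```

The drop from baseline to with_rubrics would be the Phase 1 number to report per dataset.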

Look at the aggregate numbers on each bundle —> e.g. citation count, source count, and average sentiment.
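The per-bundle aggregates could be computed with a single groupby; column names ("bundle", "source", "citations", a numeric "sentiment" score) are assumptions about the schema, not the real one.

```python
import pandas as pd

# Toy rows standing in for the coded dataset.
df = pd.DataFrame({
    "bundle": ["A", "A", "B"],
    "source": ["s1", "s2", "s1"],
    "citations": [3, 1, 2],
    "sentiment": [0.8, -0.2, 0.5],
})

# One row per bundle: total citations, distinct sources, mean sentiment.
agg = df.groupby("bundle").agg(
    citation_count=("citations", "sum"),
    source_count=("source", "nunique"),
    avg_sentiment=("sentiment", "mean"),
)
print(agg)
```

Using `nunique` for sources (rather than a plain count) keeps a source that appears twice in the same bundle from being double-counted.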